Time Complexity

See also: [[Space Complexity]]

Describes the amount of computer time it takes to run an algorithm. It is usually estimated by counting the number of elementary operations performed, assuming each elementary operation takes a fixed amount of time.

Since an algorithm's running time may differ depending on the input, you'd usually consider the worst-case time complexity, i.e. the maximum amount of time taken over all inputs of a given size. Another option is to use average-case complexity.

Commonly expressed in big O notation: $O(n)$, $O(n \log n)$, $O(n^a)$, $O(2^n)$, etc.

Complexities are classified according to their big O notation; for example, an algorithm with time complexity $O(n)$ is a linear time algorithm.

Constant time

An algorithm is constant time ($O(1)$) if its running time does not depend on the size of the input. For example, accessing a single element in an array takes constant time, as only one operation is performed to locate it. Similarly, finding the smallest value in an array sorted in ascending order also takes $O(1)$ time, as we know it is the first element.
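A minimal Python sketch of both cases (the function names here are illustrative, not from the source):

```python
def element_at(values: list[int], i: int) -> int:
    # Indexing a list is a single operation, so this is O(1)
    # no matter how long the list is.
    return values[i]


def smallest_in_sorted(sorted_values: list[int]) -> int:
    # Assuming ascending order, the minimum is always at index 0,
    # so no scan of the remaining elements is needed: O(1).
    return sorted_values[0]
```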

Despite being called constant time, the running time does not have to be completely independent of the problem size - but an upper bound on the running time must hold independently of the problem size.

Source

Logarithmic time

An algorithm takes logarithmic time when its big O notation is $O(\log n)$. Algorithms that take logarithmic time are considered to be highly efficient, as the ratio of operations to the size of the input decreases as $n$ increases.

Examples of logarithmic algorithms are often found in operations on binary trees or when using binary search, since these algorithms do not need to access every element in order to finish.

For example, a search for a number in a sorted list of numbers:
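A minimal binary search sketch (assuming an ascending sorted list; the names are illustrative):

```python
def binary_search(sorted_numbers: list[int], target: int) -> int:
    # Return the index of target in sorted_numbers, or -1 if absent.
    low, high = 0, len(sorted_numbers) - 1
    while low <= high:
        mid = (low + high) // 2  # halve the search range each pass
        if sorted_numbers[mid] == target:
            return mid
        elif sorted_numbers[mid] < target:
            low = mid + 1  # target can only be in the upper half
        else:
            high = mid - 1  # target can only be in the lower half
    return -1
```

Each pass discards half of the remaining elements, so at most about $\log_2 n$ passes run: $O(\log n)$.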

Quadratic time

A quadratic algorithm has a big O notation of $O(n^2)$. Simple comparison-based sorting algorithms are quadratic in the worst case (e.g. insertion sort).
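A minimal insertion sort sketch (an assumed illustration, not taken from the source):

```python
def insertion_sort(values: list[int]) -> None:
    # Sort in place: for each element, shift larger elements one
    # slot to the right until its insertion point is found.
    for i in range(1, len(values)):
        key = values[i]
        j = i - 1
        while j >= 0 and values[j] > key:
            values[j + 1] = values[j]
            j -= 1
        values[j + 1] = key
```

In the worst case (reverse-sorted input) the inner loop runs $i$ times for each $i$, roughly $n(n-1)/2$ comparisons in total: $O(n^2)$.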

An example of searching for duplicate names in a string array:
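A sketch of that nested-loop search in Python (the function name and boolean return are assumptions):

```python
def has_duplicate(names: list[str]) -> bool:
    n = len(names)
    for i in range(n - 1):         # outer loop: n-1 iterations
        for j in range(i + 1, n):  # inner loop: n-i-1 iterations
            if names[i] == names[j]:
                return True
    return False
```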

There are $n-1$ iterations of the outer loop. On each iteration $i$, the inner loop iterates $n-i-1$ times. So in total the inner loop iterates $(n-1) + (n-2) + \dots + 1$ times. So the number of times that [the inner function] executes is equal to the sum of the numbers from 1 to $n-1$. That sum is $n(n-1)/2$, which is in $\Theta(n^2)$ and thus also in $O(n^2)$.

Source

Exponential time

An example of an algorithm that has a big O of $O(2^n)$ is a recursive Fibonacci function:
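A minimal sketch of the naive recursion (assuming the usual $F(0) = 0$, $F(1) = 1$ convention):

```python
def fib(n: int) -> int:
    # Naive recursion: each call spawns two more calls, so the call
    # tree roughly doubles at every level, giving O(2^n) time.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```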
